Results 1 - 3 of 3
1.
J Clin Epidemiol; 136: 96-132, 2021 Aug.
Article in English | MEDLINE | ID: covidwho-1157464

ABSTRACT

OBJECTIVE: To compare the inferences regarding the effectiveness of various non-pharmaceutical interventions (NPIs) for COVID-19 obtained from different SIR models.

STUDY DESIGN AND SETTING: We explored two models developed by Imperial College that considered either NPIs alone, without accounting for mobility (model 1), or mobility alone (model 2), and a third model accounting for the combination of mobility and NPIs (model 3). Imperial College had applied models 1 and 2 to 11 European countries and to the USA, respectively. We applied these models to 14 European countries (the original 11 plus 3 more) over two different time horizons.

RESULTS: While model 1 found that lockdown was the most effective measure in the original 11 countries, model 2 showed that lockdown had little or no benefit, as it was typically introduced at a point when the time-varying reproduction number was already very low. Model 3 found that the simple banning of public events was beneficial, while lockdown had no consistent impact. Based on Bayesian metrics, model 2 was better supported by the data than either model 1 or model 3 over both time horizons.

CONCLUSION: Inferences about the effects of NPIs are non-robust and highly sensitive to model specification. Within the SIR modeling framework, the impacts of lockdown are uncertain and highly model-dependent.
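To make the model dependence concrete, the sketch below shows a minimal discrete-time SIR simulation in which interventions act multiplicatively on the time-varying reproduction number, roughly in the spirit of model 1. It is an illustration only, not the Imperial College code: the function name, parameter values, intervention dates, and effect sizes are all hypothetical assumptions.

```python
import numpy as np

def simulate_sir_with_npis(n_days, pop, r0, gamma, npi_start, npi_effect, i0=100.0):
    """Discrete-time SIR in which R_t equals R0 multiplied by the effect of
    each intervention once it is active (multiplicative NPI effects, as in a
    model 1-style specification). All inputs here are hypothetical.

    npi_start:  dict mapping intervention name -> day it takes effect
    npi_effect: dict mapping intervention name -> multiplier on R_t (< 1)
    """
    s, i, r = pop - i0, i0, 0.0
    incidence, rt_series = [], []
    for t in range(n_days):
        rt = r0
        for name, start in npi_start.items():
            if t >= start:
                rt *= npi_effect[name]   # apply each active NPI
        beta = rt * gamma                # transmission rate implied by R_t
        new_inf = beta * s * i / pop
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        incidence.append(new_inf)
        rt_series.append(rt * s / pop)   # effective reproduction number
    return np.array(incidence), np.array(rt_series)

# Hypothetical example: public-event ban on day 20, lockdown on day 30
inc, rt = simulate_sir_with_npis(
    n_days=120, pop=1e7, r0=3.0, gamma=1 / 7,
    npi_start={"ban_events": 20, "lockdown": 30},
    npi_effect={"ban_events": 0.7, "lockdown": 0.4},
)
```

A model 2-style specification would instead drive R_t with mobility covariates; as the paper shows, which specification is chosen can reverse the inferred benefit of lockdown.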


Subject(s)
COVID-19/prevention & control, Communicable Disease Control/methods, Models, Statistical, Physical Distancing, Quarantine/methods, Europe, Humans, SARS-CoV-2
2.
Eur J Epidemiol; 35(8): 733-742, 2020 Aug.
Article in English | MEDLINE | ID: covidwho-708706

ABSTRACT

Forecasting models have been influential in shaping decision-making during the COVID-19 pandemic. However, there is concern that their predictions may have been misleading. Here, we dissect the predictions made by four models for daily COVID-19 death counts in New York state between March 25 and June 5, as well as the predictions of ICU bed utilisation made by the influential IHME model. We evaluated the accuracy of both the point estimates and the uncertainty estimates of the model predictions. First, we compared the "ground truth" data sources on daily deaths against which these models were trained. The models used three different data sources, and these showed substantial differences in recorded daily death counts. Two additional data sources that we examined also reported different daily death counts. For accuracy of prediction, all models fared very poorly: only 10.2% of the predictions fell within 10% of their training ground truth, irrespective of how far into the future they reached. For assessment of uncertainty, only one model matched the nominal 95% coverage relatively well, but that model did not begin issuing predictions until April 16 and thus had no impact on early, major decisions. For ICU bed utilisation, the IHME model was highly inaccurate; its point estimates started to match ground truth only after the pandemic wave had begun to wane. We conclude that trustworthy models require trustworthy input data to be trained on. Moreover, models need to be subjected to prespecified real-time performance tests before their results are provided to policy makers and public health officials.
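The two headline metrics in this evaluation, the share of point predictions falling within 10% of ground truth and the empirical coverage of nominal 95% prediction intervals, are straightforward to compute. The sketch below is an assumed reconstruction of the general idea, not the authors' code, and the arrays are made-up example data.

```python
import numpy as np

def point_accuracy(pred, truth, tol=0.10):
    """Share of point predictions within `tol` (relative error) of the truth."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    ok = truth > 0                      # relative error undefined at zero
    return np.mean(np.abs(pred[ok] - truth[ok]) / truth[ok] <= tol)

def interval_coverage(lower, upper, truth):
    """Empirical coverage of nominal prediction intervals; a well-calibrated
    95% interval should cover roughly 95% of observed values."""
    truth = np.asarray(truth, float)
    return np.mean((np.asarray(lower) <= truth) & (truth <= np.asarray(upper)))

# Hypothetical daily death counts and one model's forecasts
truth = np.array([120, 135, 150, 160, 158, 149])
pred  = np.array([100, 140, 180, 150, 130, 155])
lo, hi = pred * 0.8, pred * 1.2
print(point_accuracy(pred, truth))       # 0.5 -> 3 of 6 within 10%
print(interval_coverage(lo, hi, truth))  # ~0.83 -> 5 of 6 covered
```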


Subject(s)
Coronavirus Infections/mortality, Forecasting/methods, Intensive Care Units/statistics & numerical data, Pandemics/prevention & control, Pneumonia, Viral/mortality, Bed Occupancy, Betacoronavirus, COVID-19, Humans, Intensive Care Units/supply & distribution, Models, Statistical, Mortality/trends, New York/epidemiology, Public Health, SARS-CoV-2
3.
Non-conventional in English | WHO COVID | ID: covidwho-728592

ABSTRACT

Epidemic forecasting has a dubious track record, and its failures became more prominent with COVID-19. Poor input data, wrong modeling assumptions, high sensitivity of estimates, failure to incorporate epidemiological features, weak past evidence on the effects of available interventions, lack of transparency, errors, lack of determinacy, examination of only one or a few dimensions of the problem at hand, lack of expertise in crucial disciplines, groupthink and bandwagon effects, and selective reporting are some of the causes of these failures. Nevertheless, epidemic forecasting is unlikely to be abandoned. Some (but not all) of these problems can be fixed. Careful modeling of predictive distributions rather than focusing on point estimates, considering multiple dimensions of impact, and continuously reappraising models based on their validated performance may help. If extreme values are considered, the extreme consequences across multiple dimensions of impact should be considered as well, so as to continuously calibrate predictive insights and decision-making. When major decisions (e.g., draconian lockdowns) are based on forecasts, the harms (in terms of health, economy, and society at large) and the asymmetry of risks need to be approached in a holistic fashion, taking into account the totality of the evidence.
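One concrete way to model predictive distributions rather than focus on point estimates is to score the entire forecast distribution with a proper scoring rule such as the continuous ranked probability score (CRPS). The sketch below, with hypothetical numbers, shows how a well-calibrated but wide forecast can outscore an overconfident narrow one; it illustrates the general idea and is not a method from the paper.

```python
import numpy as np

def sample_crps(samples, observed):
    """Monte Carlo estimate of the CRPS: E|X - y| - 0.5 * E|X - X'|,
    where X, X' are independent draws from the predictive distribution
    and y is the observed value. Lower is better; the score rewards
    calibration of the whole distribution, not just the point estimate."""
    x = np.asarray(samples, float)
    term1 = np.mean(np.abs(x - observed))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

# Hypothetical: two forecasters for tomorrow's death count (truth = 150)
rng = np.random.default_rng(0)
narrow_overconfident = rng.normal(120, 5, size=2000)   # precise but biased
wide_calibrated      = rng.normal(145, 25, size=2000)  # hedged, near truth
print(sample_crps(narrow_overconfident, 150))  # ~27: larger (worse) score
print(sample_crps(wide_calibrated, 150))       # ~6: smaller (better) score
```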
